Are we in a position to investigate whether non-human animals are conscious?

Greg Detre

Wednesday, 09 May, 2001

Prof. Steve Simpson

Animal Behaviour III

 

Griffin (1984) describes the disdain in which some scientists hold the efforts of 'pre-science', but he is right to defend the need to outline in theory the investigations that must happen in practice for science to progress. Studies of animal consciousness, and of consciousness in general, could still be described as pre-scientific, insofar as there is no established methodology, no rigorously defined terminology, and confusion as to how to proceed experimentally.

Indeed, as Nagel (1974) and others have noted, consciousness can appear completely inaccessible to the scientific method, since the phenomenon being studied is not, by definition, publicly verifiable. Consciousness is privileged: only the subject has immediate access to it. It is almost as though everyone in the world had been given a little black box, and everyone peeks secretly into their own box but cannot display its contents to anyone else. We, as scientists, can observe the reactions on people's faces as they look into their boxes, and try to find some systematic relation between the look on their faces (which we assume can tell us indirectly about the contents of the box) and some other observable feature, e.g. hair colour. Of course, the analogy with consciousness breaks down quite soon. In reality, we are peeking out of our little black boxes; they are our window onto the world, rather than the other way round. Consciousness is not just subjective: it is subjectivity.

Weiskrantz limits his investigations of consciousness to the reportable, that is, what the subject says about his experience. Quizzing our subjects about the contents of their black boxes is probably the best means of ascertaining those contents. But in the case of animal consciousness, of course, animals cannot describe their internal states to us in language. Language is one of the most salient features of human behaviour that evidences mental activity; indeed, the Turing test is notable as a sufficient test for human-level mentality. Our experiments are severely constrained by the fact that we cannot simply ask a monkey whether he feels anything, or what he feels. Lastly, Dennett (1991) gives various examples, such as the phi phenomenon, of how what we are aware of, or at least remember being aware of, may differ from what we ultimately report.

Ethologists seem to hold a series of implicit assumptions about consciousness:

the Darwinian framework will not require substantial revision in order to incorporate the evolution of consciousness

the mechanism of consciousness is explicable at a biological level

consciousness plays a causal role, i.e. it is not a causally inert epiphenomenon

the function it performs is adaptive, rather than simply being a side-effect or accident of evolution

this function gives rise to more intelligent, considered, premeditated behaviour in some way

the level of consciousness fluctuates involuntarily, almost like breathing, according to the nature of the task in hand

consciousness is a binary phenomenon, i.e. it does not admit degrees

I think we can challenge most of these assumptions. The first is perhaps the most stable. Dawkins offers a choice between assuming that natural selection and evolutionary theory can explain the origin of consciousness, and assuming that consciousness is the sole aspect of life that cannot be explained by, or did not arise through, this mechanism. She makes the choice seem like a safe and easy one, but I have separated it into the two steps above, which makes it a potentially harder one. The first step is the step that Dawkins thinks she is making: can the fact of consciousness be accommodated within a Darwinian framework? I think that ultimately it can, but that this is because consciousness is not itself a biological phenomenon. This is where the second step comes in: Dawkins moves immediately to assume that if consciousness can be accommodated within a Darwinian framework, it must be a biological phenomenon, and so amenable to biological methods.

Let us digress briefly to consider the theory of panpsychism. Panpsychism is the doctrine that 'all matter ... has a psychical aspect or component' (New Shorter OED). It is analogous to pantheism or hylozoism, but rather than the 'thesis of the pervasiveness of life in nature, substitutes the pervasiveness of sentience, experience or, in a broad sense, consciousness' (Routledge Encyclopaedia of Philosophy). If we were to ask a panpsychist his view, he might concur with the first step and disagree with the second. That is, he might be happy to say that an explanation of consciousness will not disrupt evolutionary theory, but that such an explanation will be found not at the biological level but at, say, the sub-atomic. In that case, we cannot look to physiology to explain how consciousness arises, since there is mentality in every single particle or piece of matter. Panpsychism admittedly gives rise to some weird conclusions, such as my arm being conscious independently of my body, but it serves to demonstrate how consciousness could be accommodated within evolution and still not be a biological phenomenon.

Epiphenomenalism is the doctrine that 'consciousness or mental phenomena [are] by-products of the physical activity of the brain and nervous system that do not influence behaviour'. That is to say, our consciousness lets us view the workings of our brain and body without influencing the activity of our brain or nervous system in any way. It is almost like watching a scene in front of you relayed on video: the two seem to be happening at the same time, and you could almost imagine that the events on the television screen were influencing the physical events on the stage, but in actuality the video is just a portrayal of what has already happened. Why couldn't consciousness be like this? If it were to turn out that our mental events are causally inert, then consciousness could not perform any function, adaptive or maladaptive, could not alter behaviour, and could not be observed or measured. However, it would still exist: just because my conscious experience plays no role in altering my brain state or behaviour does not mean that I do not have conscious experience. And whether consciousness is an epiphenomenon or not, in humans or in animals, the requirement it places on us to treat conscious beings with moral consideration remains the same.

An epiphenomenon can be more broadly described as 'a concomitant or by-product of something'. In this sense, consciousness could be an evolutionary epiphenomenon, resulting for example from the increased complexity of social behaviour that Humphreys (1978) notes, or from the formation of internal models of other agents' behaviour, while serving no adaptive function itself. If this were the case, then consciousness might be evolutionarily quite recent and restricted to a particular set of branches on the taxonomic tree. Indeed, Jaynes (1976) considers the origin of consciousness to lie in what he terms 'the breakdown of the bicameral mind'. He cites evidence from various historical sources, including the Iliad, to argue that early historical man literally lacked consciousness, behaving and functioning permanently much as we do when on 'auto-pilot' (Baars 1988; Weiskrantz 1995, the 'British Weather Conversation Syndrome'). On this view, consciousness plays no adaptive role, since it did not arise through natural selection at all.

The examples of driving a car and playing a solo in a public concert are often given as extremes of unconscious and conscious behaviour. Often we can travel many miles down familiar roads or motorways seemingly without conscious awareness; of course, it could simply be that the memories of our awareness while driving fade almost as quickly as they are laid down. On the other hand, when nervously playing an instrument in a concert, all the deft hand movements that have become second nature during practice suffer from the conscious attention we suddenly pay them. Dawkins suggests that consciousness may help us in novel situations or where we have to plan, while we are able to function unconsciously in most other respects. Thus consciousness seems to attend certain actions, while others (like speech, learning and reason) seem to proceed largely unconsciously.

Lastly, the question of whether non-human animals are conscious seems to suggest a binary choice: either humans are the only remotely conscious form of life, or a few select other animals (higher primates, for example) are conscious too. However, if we are prepared to think in terms of a continuum of degrees and qualities of consciousness, then the question becomes at once easier and harder. It is easier because we can say that humans are the most conscious animals we know of, and then trace some line of dwindling consciousness all the way back. It is harder because we have to consider how the nature of consciousness could differ from our own current state. It requires us to be objective about the core of our subjectivity, and to free ourselves from the view 'through our eyes only'. Greenfield employs a number of evocative techniques to help a human imagine what a lesser consciousness like an animal's might be like. The most obvious is to recall one's own limited mental state when extremely tired, intoxicated or performing a routine task; dreams are another important means. Greenfield believes that consciousness may be related to macroscopic waves of activity that flood the brain, reduced in dreams and larger when lucid. Smaller brains, with smaller waves, might perhaps result in a fragmented, more imagistic, less intense conscious experience.

 

If you ask a philosopher whether animals are conscious, he may refer you back to the problem of other minds. Put simply, this is the epistemological question of whether, or by what means, we can be sure that other humans are conscious as we are. The only fact about consciousness that we have is that we are ourselves conscious, and we know this as a result of our privileged access to our own subjective state. Since we cannot apply this privileged access to gain knowledge of others' subjective states, we assume that they have them as we do, because their behaviour is often so similar to ours. This is known as the 'argument from analogy'. It assumes that other people's mental states correspond to their behaviour in the same way that our mental states correspond to our behaviour: when they wrinkle up their face they feel disgust, when they scream 'Ouch!' and snatch their hand away they feel pain, and so on. Put in these terms, the argument from analogy is largely discredited among philosophers; its most obvious problem is that it is a generalisation from just one case, our own. Instead, philosophers now place more weight on the 'other minds as theoretical entities' solution. Here, the justification takes the form of a hypothetical inference: that others have mental states is hypothesised to account for how they behave. That other people have minds is the best explanation we have for their behaviour, and so we assume it to be true. This requires no evidence from one's own mental states and behaviour, avoiding the generalisation from one case, though it does seem closely wedded to a functionalist view.

Thus it may be that although our immediate response is to view consciousness as an on-off sort of property (either one is conscious of the world around one, or one is a zombie), even a little consideration throws up strong support for the idea that consciousness waxes and wanes, and even admits different complexions at different times and between different individuals and species. Given, then, that we have as much reason to impute to animals as to humans a level of consciousness commensurate with observed complexity, I see two main routes that science could take in its attempt to quantify the levels of consciousness of animals. Both revolve around the 'other minds as theoretical entities' solution to the problem of other minds, but work in opposite directions. The first involves classifying behaviour: if an animal is capable of a certain level of complex behaviour, then we can assume that there must be a mentality of corresponding complexity. The second assesses the physiology of the mental mechanism itself: if the architecture and organisation of the brain is of a given complexity, then we can similarly assume that it gives rise to mentality of corresponding complexity.

 

1. Assessing behaviour

If we are going to be consistent in our appraisals of agenthood in the world around us, then we must ascribe a level of consciousness to other animals (including humans) on the basis of their behaviour. We cannot hope that a single, simple test will suffice; rather, we have to build up a picture of the internal state of the animal or person. As a rough guide, we might settle on the sort of tests whose results tally with the average pet-owner's view of the world. This is what the argument from analogy seeks to do, but it looks purely for similarities with us, with human behaviour.

However, we may one day have to broaden this approach beyond the similarities a given creature bears to us. We might look for similarities between creatures we consider to be fairly conscious and other creatures that resemble them but are relatively dissimilar to us, gradually building up a picture of the sort of behaviours we are looking for in a conscious animal. In the celebrated case of imagining what it is like to be a bat, we will necessarily have to consider how having echolocation and sleeping upside down would affect our mental world. It seems likely that bats do not behave in some of the ways humans do, yet they do exhibit behaviour of a similar complexity or of a considered, premeditated nature. This will be difficult and probably highly speculative, especially at first. But how else could we hope to interpret dolphin behaviour as another dolphin would, if not by analogy with the way humans appraise each other?

Finally, the Turing test provides an ideal example of a test for human-level intelligence and self-awareness, although it rests on full command of syntactic language and cannot be scaled down to make it appreciably easier. Even so, the basic idea of an imitation game might lead to a means of classifying levels of complex behaviour.

 

2. Assessing brains

Alternatively, the best guide to the consciousness of another animal may prove more quantitative: more obvious to measure, though harder to understand. The fact remains that our brains are inextricably and powerfully linked with our mental world, in both causal directions. That is to say, if I damage or physically affect my brain, my thoughts are altered as a result. We can see this directly using electrode stimulation during brain operations, where patients report a huge variety of subjective feelings, ranging from sensations to episodic memories. In the other direction, we can watch how different areas of the brain light up systematically on an fMRI scan as the subject is asked to perform different mental operations.

Our brains are crucially different from those of other animals. We can trace a path back through the higher primates and other mammals to the reptiles, and see how our brains have developed over evolutionary time. Notably, the evolutionarily recent furrowed neocortex has grown over the mammalian mid-brain, which in turn encompasses the reptilian hind-brain. At each stage, the added complexity and functionality has been adaptive and necessary for the co-ordination of the behaviour of successively larger and more complex organisms. For instance, Humphreys (amongst many others) sought an explanation of the need for higher cognitive function in terms of the growing complexity of social behaviour. He showed a strong correlation between the proportional size of the neocortex and the average size of the social group, implying that bigger groups required richer internal models to deal with the possibilities for deceit, kin recognition, cooperation and so on.
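To make concrete the kind of comparative analysis this implies, here is a minimal sketch in Python. The species values are invented purely for illustration (they are not Humphreys's data), and the calculation is an ordinary Pearson correlation; nothing here is tied to any particular published dataset.

import math

# Hypothetical, invented values: (neocortex ratio, mean social group size).
# These numbers are for illustration only, not real comparative data.
species_data = {
    "species A": (1.2, 5),
    "species B": (2.0, 14),
    "species C": (2.6, 25),
    "species D": (3.2, 40),
    "species E": (3.8, 65),
}

def pearson_r(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

ratios, group_sizes = zip(*species_data.values())
print("correlation between neocortex ratio and group size: %.2f" % pearson_r(ratios, group_sizes))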

I think that a functionalist, an epiphenomenalist and a materialist would all broadly concur that the scope, sophistication and intensity of conscious experience are related in some way to the complexity of brain state. This is not to say that the relation is systematic or directly proportional. Even so, we should be able to hone a statistical measure of brain complexity that we can use as a best-guess indicator of conscious level. It might incorporate variables such as the approximate total number of neurons, connectivity or total number of synapses, degree of myelination, proportion and width of neocortex, afferent and efferent neural bandwidth, and so on (a sketch of one possible form for such a measure follows). Such a measure would certainly prove useful as an intermediate guide for animal-rights legislators, and it might be interesting to see whether it correlated with the degree of consciousness people intuitively attribute to a given species.
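As a sketch of what such a best-guess indicator might look like, the following Python fragment combines the variables listed above into a single weighted index. The reference values, weights and normalisation are all assumptions of my own for illustration, not an established measure.

# A hypothetical weighted index of brain complexity. Every reference value and
# weight below is an illustrative assumption, not an established measure.
REFERENCE = {
    "neuron_count": 8.6e10,          # rough human-scale reference values
    "synapse_count": 1.5e14,
    "neocortex_fraction": 0.8,       # proportion of brain volume that is neocortex
    "myelination_index": 1.0,        # arbitrary 0-1 scale
    "neural_bandwidth": 1.0,         # afferent/efferent bandwidth, arbitrary units
}

WEIGHTS = {
    "neuron_count": 0.3,
    "synapse_count": 0.3,
    "neocortex_fraction": 0.2,
    "myelination_index": 0.1,
    "neural_bandwidth": 0.1,
}

def complexity_index(measurements):
    # Weighted sum of each variable expressed as a fraction of its reference
    # value, capped at 1.0 so no single variable dominates the index.
    return sum(
        WEIGHTS[k] * min(measurements[k] / REFERENCE[k], 1.0)
        for k in WEIGHTS
    )

# Example: a made-up species with roughly 1% of the human neuron count.
print(complexity_index({
    "neuron_count": 1e9,
    "synapse_count": 1e12,
    "neocortex_fraction": 0.3,
    "myelination_index": 0.5,
    "neural_bandwidth": 0.2,
}))

One could then rank species by such an index and compare the ranking with people's intuitive judgements, though the choice of variables and weights would of course be contentious.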

 

Ultimately, then, we are already in a position to compile statistics on animal consciousness using these sorts of techniques. I doubt that such a consciousness 'ranking' table would hold many surprises, given that it simply makes explicit the criteria we already use internally all the time. However, the deeper epistemological question remains: there is still no conceivable means of viewing or measuring what it is actually like for the animal to be the animal.